In recent years, advances in EEG-based emotion recognition have attracted wide attention in the fields of human-computer interaction and cognitive science. However, recognizing emotions with limited labels has become a bottleneck for research and applications. To address this problem, this paper proposes a self-supervised group meiosis contrastive learning framework (SGMC) based on EEG signals that are stimulus-aligned across subjects. In SGMC, a novel genetics-inspired data augmentation method, called meiosis, is developed. It exploits the stimulus alignment among EEG samples in a group to generate augmented groups by pairing, exchanging, and separating. The model employs a group projector to extract group-level feature representations from EEG samples triggered by the same emotional video stimuli. Contrastive learning is then used to maximize the similarity of the group-level representations of augmented groups sharing the same stimulus. SGMC achieves state-of-the-art emotion recognition results on the publicly available DEAP dataset, with accuracies of 94.72% and 95.68% on the valence and arousal dimensions, and also attains competitive performance of 94.04% on the public SEED dataset. Notably, SGMC shows strong performance even when using limited labels. Moreover, the feature visualization results suggest that the model has learned emotion-related feature representations that improve emotion recognition. The effect of group size is further evaluated in a hyperparameter analysis. Finally, control experiments and an ablation study are conducted to examine the rationality of the architecture. The code is publicly available online.
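The pairing, exchanging, and separating steps of the meiosis augmentation can be sketched roughly as follows. This is a minimal illustration only: the abstract does not fix the exchange rule, so the random half-swap between two stimulus-aligned groups is an assumption.

```python
import numpy as np

def meiosis_augment(group_a, group_b, rng=None):
    """Hypothetical sketch of the 'meiosis' augmentation.

    group_a, group_b: arrays of shape (n, ...) holding EEG samples recorded
    under the SAME video stimulus (stimulus-aligned). Pairing: line the two
    groups up; exchanging: swap a random half of the samples between them;
    separating: return the two recombined groups as augmented views.
    """
    rng = np.random.default_rng(rng)
    n = len(group_a)
    swap = rng.permutation(n)[: n // 2]          # indices to exchange
    child_a, child_b = group_a.copy(), group_b.copy()
    child_a[swap], child_b[swap] = group_b[swap], group_a[swap]
    return child_a, child_b
```

Because both children still contain only samples elicited by the same stimulus, their group-level representations can serve as a positive pair for the contrastive loss.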
To date, the most powerful semi-supervised object detectors (SS-OD) are based on pseudo-boxes, which require a series of post-processing steps with fine-tuned hyperparameters. In this work, we propose replacing sparse pseudo-boxes with dense predictions as a unified form of pseudo-label. Compared to pseudo-boxes, our Dense Pseudo-Label (DPL) does not involve any post-processing and thus retains richer information. We also introduce a region selection technique to highlight the key information while suppressing the noise carried by dense labels. We name our proposed SS-OD algorithm that leverages the DPL as Dense Teacher. On COCO and VOC, Dense Teacher shows superior performance under various settings compared with the pseudo-box-based methods.
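The region selection idea, keeping only the highest-scoring dense predictions and suppressing the rest as noise, might look like this minimal sketch; the `keep_ratio` hyperparameter and the simple top-k thresholding rule are assumptions, not the paper's actual criterion.

```python
import numpy as np

def region_select(dense_scores, keep_ratio=0.01):
    """Sketch of region selection over a dense score map.

    Keeps the top `keep_ratio` fraction of locations as learning regions
    (mask value 1.0) and zeros out the rest, so that the noisy low-score
    portion of the dense pseudo-label does not dominate training.
    """
    flat = dense_scores.ravel()
    k = max(1, int(len(flat) * keep_ratio))
    thresh = np.partition(flat, -k)[-k]          # k-th largest score
    return (dense_scores >= thresh).astype(float)
```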
Artificial intelligence and neuroscience are deeply interactive. Artificial neural networks (ANNs) have been a versatile tool for studying the neural representations in the ventral visual stream, and knowledge from neuroscience has in turn inspired ANN models to improve performance on tasks. However, how to merge these two directions into a unified model has been less studied. Here, we propose a hybrid model, called the deep autoencoder with neural response (DAE-NR), which incorporates information from the visual cortex into ANNs to achieve better image reconstruction and higher neural representation similarity between biological and artificial neurons. Specifically, the same visual stimuli (i.e., natural images) are input to both the mouse brain and the DAE-NR. The DAE-NR jointly learns to map a specific layer of its encoder network to the biological neural responses in the ventral visual stream via a mapping function, and to reconstruct the visual input via its decoder. Our experiments demonstrate that, if and only if it is trained with joint learning, the DAE-NR can (i) improve the performance of image reconstruction and (ii) increase the representational similarity between biological neurons and artificial neurons. The DAE-NR offers a new perspective on the integration of computer vision and visual neuroscience.
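The joint objective described above, reconstruction plus an encoder-to-brain mapping term, can be sketched as a weighted sum. The squared-error form of both terms and the weighting `lam` are assumptions for illustration; the abstract specifies only that the two objectives are learned jointly.

```python
import numpy as np

def dae_nr_loss(x, x_hat, z_mapped, r_bio, lam=1.0):
    """Hypothetical sketch of a DAE-NR-style joint objective.

    x / x_hat: the input image and its decoder reconstruction.
    z_mapped : an encoder layer passed through the mapping function.
    r_bio    : recorded biological neural responses to the same stimulus.
    lam      : assumed weight balancing the two terms.
    """
    rec = np.mean((x - x_hat) ** 2)          # image reconstruction error
    sim = np.mean((z_mapped - r_bio) ** 2)   # encoder-to-brain mapping error
    return rec + lam * sim
```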
The rise in data has led to the need for dimension reduction techniques, especially in the area of non-scalar variables, including time series, natural language processing, and computer vision. In this paper, we specifically investigate dimension reduction for time series through functional data analysis. Current methods for dimension reduction in functional data are functional principal component analysis and functional autoencoders, which are limited to linear mappings or to scalar representations of the time series and are therefore inefficient; in real data applications, the nature of the data is much more complex. We propose a non-linear function-on-function approach, consisting of a functional encoder and a functional decoder, that uses continuous hidden layers of continuous neurons to learn the structure inherent in functional data, addressing the aforementioned shortcomings of the existing approaches. Our approach yields a low-dimensional latent representation by reducing both the number of functional features and the number of timepoints at which the functions are observed. The effectiveness of the proposed model is demonstrated through multiple simulations and real data examples.
High feature dimensionality is a challenge in music emotion recognition (MER). There is no common consensus on the relation between audio features and emotion. A MER system that uses all available features to recognize emotion is not an optimal solution, since the feature set contains irrelevant data acting as noise. In this paper, we introduce a feature selection approach to eliminate redundant features for MER. We created a Selected Feature Set (SFS) based on a feature selection algorithm (FSA) and benchmarked it by training two models, Support Vector Regression (SVR) and Random Forest (RF), and comparing them against the same models trained with the Complete Feature Set (CFS). The results indicate that the performance of MER improved for both the RF and SVR models when using the SFS. We found that the FSA can improve performance in all scenarios, and that it has potential benefits for model efficiency and stability in the MER task.
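The SFS-versus-CFS protocol can be sketched as follows. The abstract does not name the FSA, so the correlation-based ranking below is a hypothetical stand-in; the selected subset would then be fed to SVR and RF models (e.g. via scikit-learn) in place of the full feature matrix.

```python
import numpy as np

def select_features(X, y, k=10):
    """Hypothetical FSA stand-in: rank features by absolute Pearson
    correlation with the emotion target and keep the top k, forming the
    Selected Feature Set (SFS) from the Complete Feature Set (CFS)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    keep = np.argsort(corr)[-k:]                 # indices of the k best features
    return X[:, keep], keep
```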
Autonomous cars are indispensable as humans go further down the hands-free route. Although the existing literature highlights that acceptance of the autonomous car will increase if it drives in a human-like manner, little research has offered a naturalistic, passenger's-seat experience with which to examine the human likeness of current autonomous cars. The present study tested whether an AI driver could create a human-like ride experience for passengers, based on 69 participants' feedback in a real-road scenario. We designed a ride experience-based version of the non-verbal Turing test for automated driving. Participants rode in autonomous cars (driven by either human or AI drivers) as passengers and judged whether the driver was human or AI. The AI driver failed to pass our test because passengers detected the AI driver above chance. In contrast, when the human driver drove the car, the passengers' judgement was around chance. We further investigated how human passengers ascribe humanness in our test. Based on Lewin's field theory, we advanced a computational model combining signal detection theory with pre-trained language models to predict passengers' humanness-rating behaviour. We employed the affective transition between pre-study baseline emotions and the corresponding post-stage emotions as the signal strength of our model. Results showed that the passengers' ascription of humanness increased with greater affective transition. Our study suggests an important role for affective transition in passengers' ascription of humanness, which might become a future direction for autonomous driving.
Detecting abnormal crowd motion emerging from complex interactions of individuals is paramount to ensure the safety of crowds. Crowd-level abnormal behaviors (CABs), e.g., counter flow and crowd turbulence, are proven to be the crucial causes of many crowd disasters. In the recent decade, video anomaly detection (VAD) techniques have achieved remarkable success in detecting individual-level abnormal behaviors (e.g., sudden running, fighting and stealing), but research on VAD for CABs is rather limited. Unlike individual-level anomaly, CABs usually do not exhibit salient difference from the normal behaviors when observed locally, and the scale of CABs could vary from one scenario to another. In this paper, we present a systematic study to tackle the important problem of VAD for CABs with a novel crowd motion learning framework, multi-scale motion consistency network (MSMC-Net). MSMC-Net first captures the spatial and temporal crowd motion consistency information in a graph representation. Then, it simultaneously trains multiple feature graphs constructed at different scales to capture rich crowd patterns. An attention network is used to adaptively fuse the multi-scale features for better CAB detection. For the empirical study, we consider three large-scale crowd event datasets, UMN, Hajj and Love Parade. Experimental results show that MSMC-Net could substantially improve the state-of-the-art performance on all the datasets.
Achieving accurate and automated tumor segmentation plays an important role in both clinical practice and radiomics research. Segmentation in medicine is now often performed manually by experts, which is a laborious, expensive and error-prone task. Manual annotation relies heavily on the experience and knowledge of these experts, and there is much intra- and inter-observer variation. Therefore, it is of great significance to develop a method that can automatically segment tumor target regions. In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET with the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors, which uses multi-scale convolution operations to extract feature information and can highlight tumor-region location information while suppressing non-tumor-region location information. In addition, our network uses dual-channel inputs in the encoding stage and fuses them in the decoding stage, taking advantage of the differences and complementarities between PET and CT. We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and compared it with other attention methods for tumor segmentation. The DSC scores of 0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that the ISA-Net method achieves better segmentation performance and better generalization. Conclusions: The method proposed in this paper performs multi-modal medical image tumor segmentation and can effectively utilize the differences and complementarities between modalities. With proper adjustment, the method can also be applied to other multi-modal or single-modal data.
Brain age has been proven to be a phenotype related to cognitive performance and brain disease. Achieving accurate brain age prediction is an essential prerequisite for optimizing the predicted brain-age difference as a biomarker. As a comprehensive biological characteristic, brain age is hard to exploit accurately with models based on feature engineering and local processing, such as local convolution and recurrent operations that process only one local neighborhood at a time. Instead, vision transformers learn global attentive interactions among patch tokens, introducing less inductive bias and modeling long-range dependencies. In this regard, we propose a novel network for learning brain age that interprets global and local dependencies, where the corresponding representations are captured by a successive permuted transformer (SPT) and convolution blocks. The SPT brings computational efficiency and locates 3D spatial information indirectly by continuously encoding 2D slices from different views. Finally, we collected a large cohort of 22,645 subjects with ages ranging from 14 to 97, and our network performed best among a series of deep learning methods, yielding a mean absolute error (MAE) of 2.855 on the validation set and 2.911 on an independent test set.
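The SPT is described as encoding 2D slices from different views of the 3D volume; how the slices are grouped and permuted internally is not specified in the abstract, but the slicing itself can be sketched as stacks along the three orthogonal axes:

```python
import numpy as np

def slices_from_views(volume):
    """Sketch: view a 3D brain volume as 2D slice stacks from three
    orthogonal views (axial, coronal, sagittal), as input for a
    slice-encoding transformer. The view names are conventional labels,
    not taken from the paper."""
    axial    = [volume[i, :, :] for i in range(volume.shape[0])]
    coronal  = [volume[:, j, :] for j in range(volume.shape[1])]
    sagittal = [volume[:, :, k] for k in range(volume.shape[2])]
    return axial, coronal, sagittal
```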
Model-based single-image dehazing algorithms restore haze-free images with sharp edges and rich details for real-world hazy images, but at the expense of low PSNR and SSIM values for synthetic hazy images. Data-driven algorithms restore haze-free images with high PSNR and SSIM values for synthetic hazy images, but with low contrast, and even some haze remaining, for real-world hazy images. In this paper, a novel single-image dehazing algorithm is introduced by combining model-based and data-driven approaches. Both the transmission map and the atmospheric light are first estimated by a model-based method, and then refined by dual-scale generative adversarial network (GAN)-based approaches. The resulting algorithm forms a neural augmentation that converges very fast, whereas the corresponding purely data-driven approach might not converge. The haze-free image is restored using the estimated transmission map and atmospheric light together with the Koschmieder law. Experimental results indicate that the proposed algorithm can remove haze well from both real-world and synthetic hazy images.
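The final restoration step follows directly from the Koschmieder law, I = J*t + A*(1 - t), which is inverted for the haze-free image J once the transmission map t and atmospheric light A have been estimated. A minimal sketch (the lower clamp `t_min` is a common stabilization assumption, not a value from the paper):

```python
import numpy as np

def restore(I, t, A, t_min=0.1):
    """Invert the Koschmieder law I = J*t + A*(1-t) to recover J.

    I: hazy image, shape (H, W, 3), values in [0, 1].
    t: estimated transmission map, shape (H, W).
    A: estimated atmospheric light, shape (3,).
    t is clamped from below to avoid amplifying noise where haze is dense.
    """
    t = np.clip(t, t_min, 1.0)[..., None]        # broadcast over channels
    return np.clip((I - A) / t + A, 0.0, 1.0)
```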